Indoor intrusion detection based on direction-of-arrival estimation algorithm for single snapshot
REN Xiaokui, LIU Pengfei, TAO Zhiyong, LIU Ying, BAI Lichun
Journal of Computer Applications    2021, 41 (4): 1153-1159.   DOI: 10.11772/j.issn.1001-9081.2020071030
Intrusion detection methods based on Channel State Information (CSI) are vulnerable to environment layout and noise interference, resulting in a low detection rate. To solve this problem, an indoor intrusion detection method based on a single-snapshot Direction-Of-Arrival (DOA) estimation algorithm was proposed. Firstly, exploiting the spatially selective fading of wireless signals, the CSI data received by the antenna array was decomposed mathematically, and the unknown DOA estimation problem was transformed into an over-complete representation problem. Secondly, the sparsity of the signal was constrained by the l1 norm, and accurate DOA information was obtained by solving the sparse regularized optimization problem, providing reliable feature parameters for the final detection result at the data level. Finally, the Indoor Safety Index Number (ISIN) was evaluated according to the DOA changes between consecutive moments, and indoor intrusion detection was thereby realized. The method was verified in real indoor scenes and compared with traditional data preprocessing methods based on principal component analysis and the discrete wavelet transform. Experimental results show that the proposed method accurately detects intrusions in different complex indoor environments, with an average detection rate of more than 98%, and is more robust than the comparison algorithms.
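Below is a minimal sketch of the sparse-recovery step described above: a steering-vector dictionary over an angle grid and an ISTA solver for the l1-regularized single-snapshot problem. The array geometry (uniform linear array, half-wavelength spacing), the regularization weight and the solver are illustrative assumptions; the paper's exact optimization scheme and the ISIN computation are not reproduced.

```python
import numpy as np

def doa_single_snapshot(y, n_antennas, grid_deg, n_iter=500):
    """Sparse angular spectrum from a single array snapshot y, obtained by solving
    an l1-regularized over-complete representation problem with ISTA.
    Assumes a uniform linear array with half-wavelength element spacing."""
    angles = np.deg2rad(grid_deg)
    n = np.arange(n_antennas)[:, None]
    A = np.exp(-2j * np.pi * 0.5 * n * np.sin(angles)[None, :])      # steering dictionary
    lam = 0.2 * np.max(np.abs(A.conj().T @ y))                       # illustrative weight
    L = np.linalg.norm(A, 2) ** 2                                    # gradient step bound
    x = np.zeros(len(grid_deg), dtype=complex)
    for _ in range(n_iter):
        z = x - A.conj().T @ (A @ x - y) / L                         # gradient step on 0.5*||Ax - y||^2
        mag = np.abs(z)
        x = np.where(mag > lam / L, (1 - lam / (L * mag + 1e-12)) * z, 0)  # complex soft threshold
    return np.abs(x)                                                 # sparse angular spectrum

# toy usage: sources at -20 and 35 degrees observed by an 8-element array
rng = np.random.default_rng(0)
grid = np.arange(-90.0, 91.0, 1.0)
steer = lambda deg: np.exp(-2j * np.pi * 0.5 * np.arange(8) * np.sin(np.deg2rad(deg)))
y = steer(-20) + 0.8 * steer(35) + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
spectrum = doa_single_snapshot(y, 8, grid)
print(grid[spectrum > 0.5 * spectrum.max()])                         # estimated DOAs
```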
Stylistic multiple features mining based on attention network
WU Haiyan, LIU Ying
Journal of Computer Applications    2020, 40 (8): 2171-2181.   DOI: 10.11772/j.issn.1001-9081.2019122204
Mining the features of different registers in a large-scale corpus is difficult and requires considerable professional knowledge and manpower. To solve this problem, a method for automatically mining the features that distinguish different registers was proposed. First, a register was represented by words, parts of speech, punctuation marks and their bigrams, syntactic structures, and multiple combined features. Then, a combination of an attention mechanism and a Multi-Layer Perceptron (MLP) (i.e. an attention network) was used to classify texts into the registers of novel, news and textbook, and the important features that help to distinguish the registers were automatically extracted in this process. Finally, further analysis of these features yielded the characteristics of different registers and some linguistic conclusions. Experimental results show that novel, news and textbook differ significantly in words, topic words, word dependencies, parts of speech, punctuation and syntactic structures, which implies that the differences in communication objects, purposes, contents and environments naturally lead to diversity in the use of words, parts of speech, punctuation and syntactic structures.
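The following sketch shows the kind of attention-plus-MLP classifier the abstract describes: feature tokens are embedded, attention weights produce a weighted summary, and the weights expose which features matter for distinguishing the three registers. All dimensions and the embedding scheme are hypothetical, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionMLP(nn.Module):
    """Attention over a bag of register features followed by an MLP classifier."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.att = nn.Linear(emb_dim, 1)              # scores each feature token
        self.mlp = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, token_ids):                     # token_ids: (batch, n_features)
        e = self.emb(token_ids)                       # (batch, n_features, emb_dim)
        w = torch.softmax(self.att(e).squeeze(-1), dim=-1)   # attention weights
        doc = (w.unsqueeze(-1) * e).sum(dim=1)        # weighted feature summary
        return self.mlp(doc), w                       # weights reveal salient features

# toy usage: a batch of 2 documents, each described by 10 feature ids
model = AttentionMLP(vocab_size=5000)
logits, weights = model(torch.randint(0, 5000, (2, 10)))
print(logits.shape, weights.shape)                    # torch.Size([2, 3]) torch.Size([2, 10])
```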
Low-resolution image recognition algorithm with edge learning
LIU Ying, LIU Yuxia, BI Ping
Journal of Computer Applications    2020, 40 (7): 2046-2052.   DOI: 10.11772/j.issn.1001-9081.2019112041
Due to the influence of lighting conditions, shooting angles, transmission equipment and the surrounding environment, target objects in criminal investigation video images often have low resolution and are difficult to recognize. To improve the recognition rate of low-resolution images, a low-resolution image recognition algorithm based on adversarial edge learning was proposed on the basis of the classic LeNet-5 recognition network. Firstly, an adversarial edge learning network was used to generate a fantasy edge of the low-resolution image that is similar to the edge of the corresponding high-resolution image. Secondly, this edge information was fused into the recognition network as prior information for recognizing the low-resolution image. Experiments were performed on three datasets: MNIST, EMNIST and Fashion-MNIST. The results show that fusing the fantasy edge of a low-resolution image into the recognition network effectively increases the recognition rate of low-resolution images.
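A hedged sketch of the fusion idea: the generated edge map is fed to a LeNet-style classifier as a second input channel alongside the low-resolution image. The adversarial edge generator and the paper's exact fusion point are not shown; channel concatenation is used here only as one plausible way to inject the edge prior.

```python
import torch
import torch.nn as nn

class EdgeFusedLeNet(nn.Module):
    """LeNet-style classifier taking the low-resolution image and its (generated)
    edge map as two input channels; any module producing a 1-channel edge map fits."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 6, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2))             # 14 -> 5
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes))

    def forward(self, low_res, edge_map):
        x = torch.cat([low_res, edge_map], dim=1)     # fuse the edge prior as a channel
        return self.classifier(self.features(x))

# toy usage on 28x28 inputs (MNIST-sized)
img = torch.rand(4, 1, 28, 28)
edge = torch.rand(4, 1, 28, 28)                       # stand-in for the "fantasy edge"
print(EdgeFusedLeNet()(img, edge).shape)              # torch.Size([4, 10])
```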
High dynamic range imaging algorithm based on luminance partition fuzzy fusion
LIU Ying, WANG Fengwei, LIU Weihua, AI Da, LI Yun, YANG Fanchao
Journal of Computer Applications    2020, 40 (1): 233-238.   DOI: 10.11772/j.issn.1001-9081.2019061032
To solve the problems of color distortion and loss of local detail caused by histogram expansion when a High Dynamic Range (HDR) image is generated from a single image, a high dynamic range imaging algorithm based on luminance partition fuzzy fusion was proposed. Firstly, the luminance component of a normally exposed color image was extracted, and the luminance was divided into two intervals according to a luminance threshold. Then, the luminance ranges of the two intervals were extended by an improved exponential function, so that the luminance of the low-luminance area was increased, the luminance of the high-luminance area was decreased, and both ranges were expanded, which increased the overall contrast of the image while preserving color and detail information. Finally, the extended image and the original normally exposed image were fused into a high dynamic range image based on fuzzy logic. The proposed algorithm was analyzed from both subjective and objective aspects. The experimental results show that it effectively expands the luminance range of the image, keeps the color and detail information of the scene, and generates an image with better visual effect.
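The sketch below illustrates the partition and fusion steps under stated assumptions: the luminance is split at a threshold, each interval is stretched with an exponential-style mapping, and the stretched image is blended with the original using a smooth membership weight standing in for the fuzzy-logic fusion. The paper's exact stretching function and fuzzy rules are not reproduced.

```python
import numpy as np

def hdr_partition_fusion(lum, thr=0.5, gamma=2.0):
    """Split luminance (scaled to [0, 1]) at a threshold, stretch each interval
    with an exponential-style mapping, then blend the stretched image with the
    original using a smooth membership weight."""
    low = lum <= thr
    stretched = np.empty_like(lum, dtype=float)
    # concave mapping on [0, thr]: lifts dark pixels while fixing the endpoints
    stretched[low] = thr * (1 - np.exp(-gamma * lum[low] / thr)) / (1 - np.exp(-gamma))
    # convex mapping on (thr, 1]: suppresses bright pixels while fixing the endpoints
    stretched[~low] = thr + (1 - thr) * (np.exp(gamma * (lum[~low] - thr) / (1 - thr)) - 1) / (np.exp(gamma) - 1)
    membership = np.abs(lum - thr) / max(thr, 1 - thr)   # trust the stretch more away from the threshold
    return membership * stretched + (1 - membership) * lum

lum = np.random.rand(4, 4)            # stand-in for the extracted luminance component
print(hdr_partition_fusion(lum).round(3))
```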
Hyperspectral image unmixing algorithm based on spectral distance clustering
LIU Ying, LIANG Nannan, LI Daxiang, YANG Fanchao
Journal of Computer Applications    2019, 39 (9): 2541-2546.   DOI: 10.11772/j.issn.1001-9081.2019020351

To address the effect of noise on unmixing precision and the insufficient utilization of spectral and spatial information in actual Hyperspectral Unmixing (HU), an improved unmixing algorithm based on spectral distance clustering for group sparse nonnegative matrix factorization was proposed. Firstly, the HYperspectral Signal Identification by Minimum Error (Hysime) algorithm was introduced to handle the large amount of noise in actual hyperspectral images, and the signal matrix and the noise matrix were estimated by calculating eigenvalues. Then, a simple clustering algorithm based on spectral distance was proposed and used to merge and cluster adjacent pixels whose spectral reflectance distances over multiple bands are less than a given value, generating the spatial group structure. Finally, sparse nonnegative matrix factorization was performed on the basis of the generated group structure. Experimental analysis shows that, on both simulated and real data, the algorithm produces smaller Root-Mean-Square Error (RMSE) and Spectral Angle Distance (SAD) than traditional algorithms, and achieves a better unmixing effect than other advanced algorithms.
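A minimal sketch of the spatial-group construction step, assuming a 4-neighbour region-growing rule and Euclidean spectral distance (the paper's exact distance measure and merging rule may differ):

```python
import numpy as np
from collections import deque

def spectral_distance_groups(cube, dist_thr):
    """Merge 4-neighbour pixels whose spectral (Euclidean) distance is below
    dist_thr, producing a group label map; cube has shape (rows, cols, bands)."""
    rows, cols, _ = cube.shape
    labels = -np.ones((rows, cols), dtype=int)
    current = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1:
                continue
            labels[r, c] = current
            queue = deque([(r, c)])
            while queue:                              # region growing within the group
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols and labels[ni, nj] == -1
                            and np.linalg.norm(cube[i, j] - cube[ni, nj]) < dist_thr):
                        labels[ni, nj] = current
                        queue.append((ni, nj))
            current += 1
    return labels                                     # groups then drive the group-sparse NMF

cube = np.random.rand(10, 10, 50)                     # toy hyperspectral cube
print(spectral_distance_groups(cube, dist_thr=1.5))
```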

Research on factors affecting quality of mobile application crowdsourced testing
CHENG Jing, GE Luqi, ZHANG Tao, LIU Ying, ZHANG Yifei
Journal of Computer Applications    2018, 38 (9): 2626-2630.   DOI: 10.11772/j.issn.1001-9081.2018030575
The factors influencing crowdsourced testing are complex and diverse, which makes test quality difficult to assess. To solve this problem, a method for analyzing quality-influencing factors based on the Spearman correlation coefficient was proposed. Firstly, potential quality-influencing factors were obtained through analysis of test platforms, tasks and testers. Secondly, the Spearman correlation coefficient was used to calculate the correlation between each potential factor and test quality and to screen out the key factors. Finally, multiple stepwise regression was used to establish a linear evaluation relationship between the key factors and test quality. The experimental results show that, compared with traditional manual expert evaluation, the proposed method keeps the fluctuation of the evaluation error smaller when facing a large number of test tasks. Therefore, the method can accurately screen out the key factors influencing the quality of mobile application crowdsourced testing.
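The screening step can be illustrated with SciPy's Spearman correlation; the correlation threshold and the toy factors below are assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import spearmanr

def screen_key_factors(factors, quality, rho_thr=0.4):
    """Keep candidate factors whose |Spearman rho| with test quality exceeds a
    threshold; the threshold value here is illustrative."""
    selected = []
    for name, values in factors.items():
        rho, p = spearmanr(values, quality)
        if abs(rho) >= rho_thr:
            selected.append((name, round(rho, 3), round(p, 3)))
    return selected

# toy data: three candidate factors observed over 30 crowdsourced test tasks
rng = np.random.default_rng(1)
quality = rng.random(30)
factors = {
    "tester_experience": quality + 0.1 * rng.standard_normal(30),    # related
    "task_reward":       quality * 0.5 + 0.3 * rng.standard_normal(30),
    "device_brand_code": rng.random(30),                              # unrelated
}
print(screen_key_factors(factors, quality))
```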
Automatic cloud detection algorithm based on deep belief network-Otsu hybrid model
QIU Meng, YIN Haoyu, CHEN Qiang, LIU Yingjian
Journal of Computer Applications    2018, 38 (11): 3175-3179.   DOI: 10.11772/j.issn.1001-9081.2018041350
More than half of the earth's surface is covered by cloud. Current methods for detecting cloud in satellite remote sensing imagery are mainly manual or semi-automatic, rely on manual intervention and have low efficiency, so they can hardly be used in real-time or quasi-real-time applications. To improve the availability of satellite remote sensing data, an automatic cloud detection method based on a Deep Belief Network (DBN) and Otsu's method, named DOHM (DBN-Otsu Hybrid Model), was proposed. The main contribution of DOHM is to replace empirical fixed thresholds with adaptive ones, thereby achieving fully automatic cloud detection and raising the accuracy above 95%. In addition, a 9-dimensional feature vector was adopted in network training; the diversity of the input feature vector helps to capture the characteristics of cloud more effectively.
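A sketch of the adaptive-threshold idea: Otsu's method applied to the per-pixel cloud probabilities produced by the network, instead of an empirical fixed cut-off. The DBN itself and the 9-dimensional features are omitted; the probability map below is synthetic.

```python
import numpy as np

def otsu_threshold(prob_map, nbins=256):
    """Adaptive Otsu threshold over a per-pixel cloud-probability map."""
    hist, edges = np.histogram(prob_map, bins=nbins, range=(0.0, 1.0))
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                              # weight of the "clear" class
    w1 = hist.sum() - w0                              # weight of the "cloud" class
    cum_mean = np.cumsum(hist * centers)
    m0 = cum_mean / np.maximum(w0, 1e-12)
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between_var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
    return centers[np.argmax(between_var)]

# synthetic probability map with a "clear" mode and a "cloud" mode
prob_map = np.clip(np.concatenate([np.random.normal(0.2, 0.05, 5000),
                                   np.random.normal(0.8, 0.05, 5000)]), 0.0, 1.0)
t = otsu_threshold(prob_map)
cloud_mask = prob_map > t
print(round(float(t), 3), round(float(cloud_mask.mean()), 3))
```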
Privacy-preserving equal-interval approximate query algorithm in two-tiered sensor networks
WANG Taochun, CUI Zhuangzhuang, LIU Ying
Journal of Computer Applications    2017, 37 (9): 2563-2566.   DOI: 10.11772/j.issn.1001-9081.2017.09.2563
Privacy preservation is a key factor in expanding the application of Wireless Sensor Networks (WSN) and a current research hotspot. To protect the privacy of sensory data in WSN, a Privacy-preserving Equal-Interval Approximate Query (PEIAQ) algorithm for two-tiered sensor networks based on data aggregation was proposed. Firstly, sensor node IDs and sensory data were concealed in a random vector, and linear equations were then worked out by the base station based on the random vector. As a result, a histogram containing global statistics was formed, from which the approximate query results were finally obtained. In addition, the sensory data were encrypted by a perturbation technique using a key shared between each sensor node and the base station, which ensures the privacy of the sensory data. Simulation experiments show that PEIAQ reduces traffic in the query phase by approximately 60% compared with PGAQ (Privacy-preserving Generic Approximate Query). Therefore, PEIAQ is efficient and energy-saving.
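The sketch below illustrates only the general mechanism suggested by the abstract: readings masked with a keyed pseudorandom offset that the base station can regenerate, and an equal-interval histogram built from the recovered values. It is not the PEIAQ protocol itself; the random-vector and linear-equation construction is not reproduced.

```python
import hashlib
import random

def keyed_perturbation(value, node_id, epoch, shared_key):
    """Mask a reading with a pseudorandom offset derived from the node's shared
    key, so the storage tier never sees plaintext data (illustration only)."""
    seed = hashlib.sha256(f"{shared_key}:{node_id}:{epoch}".encode()).hexdigest()
    offset = random.Random(seed).uniform(0, 1000)
    return value + offset, offset                     # the base station can regenerate the offset

def equal_interval_histogram(values, lo, hi, n_bins):
    """Equal-interval histogram used for the approximate query answer."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    return counts

# node side: perturb; base-station side: remove the offsets and build the histogram
readings = [(1, 23.4), (2, 25.1), (3, 30.2)]          # (node_id, temperature)
masked = [keyed_perturbation(v, nid, epoch=7, shared_key="k")[0] for nid, v in readings]
recovered = [m - keyed_perturbation(0, nid, 7, "k")[1] for m, (nid, _) in zip(masked, readings)]
print(equal_interval_histogram(recovered, lo=20, hi=35, n_bins=3))
```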
Research and application for terminal location management system based on firmware
SUN Liang, CHEN Xiaochun, ZHENG Shujian, LIU Ying
Journal of Computer Applications    2017, 37 (2): 417-421.   DOI: 10.11772/j.issn.1001-9081.2017.02.0417
Pasting a Radio Frequency Identification (RFID) tag on the shell of a computer to trace its location in real time has been the most frequently used method for terminal location management. However, the RFID tag loses direct control of the computer once the computer leaves the authorized area. Therefore, a terminal location management system based on firmware and RFID was proposed. First of all, the authorized area was delimited by the RFID radio signal; at the boot stage, the computer was allowed to boot only if the firmware received the authorization signal of the RFID through the interaction between the firmware and the RFID tag. Secondly, the computer could function normally only while it kept receiving the RFID signal with the operating system running. Finally, the software agent for location management was protected by the firmware to prevent it from being altered or deleted. When the computer moves out of the RFID signal coverage, the software agent on the terminal catches this event, locks the terminal and destroys the data. The prototype of the terminal location management system was deployed in an office area to control about thirty computers, so that they could be used normally in authorized areas and were locked immediately once they left the authorized areas.
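A hedged sketch of the software-agent logic only (the firmware boot check and the protection of the agent are outside its scope); the polling interval and grace period are illustrative.

```python
import time

def location_agent(read_rfid_signal, lock_terminal, wipe_sensitive_data,
                   grace_seconds=30.0, poll_seconds=5.0):
    """Poll for the authorized RFID signal; once it has been absent for the grace
    period, lock the terminal and destroy sensitive data."""
    last_seen = time.monotonic()
    while True:
        if read_rfid_signal():                        # True while inside the authorized area
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen > grace_seconds:
            lock_terminal()
            wipe_sensitive_data()
            return
        time.sleep(poll_seconds)

# toy usage: the signal is never seen, so the agent locks the machine quickly
location_agent(read_rfid_signal=lambda: False,
               lock_terminal=lambda: print("terminal locked"),
               wipe_sensitive_data=lambda: print("sensitive data destroyed"),
               grace_seconds=0.1, poll_seconds=0.02)
```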
Evaluation model of mobile application crowdsourcing testers
LIU Ying, ZHANG Tao, LI Kun, LI Nan
Journal of Computer Applications    2017, 37 (12): 3569-3573.   DOI: 10.11772/j.issn.1001-9081.2017.12.3569
Mobile application crowdsourcing testers are anonymous and non-contractual, which makes it difficult for task publishers to accurately evaluate the ability of crowdsourcing testers and the quality of test results. To solve these problems, an evaluation model of mobile application crowdsourcing testers based on the Analytic Hierarchy Process (AHP) was proposed. The ability of crowdsourcing testers was evaluated comprehensively and hierarchically using multiple indexes, such as activity degree, test ability and integrity degree. The combination weight vector of the indexes at each level was calculated by constructing a judgment matrix and performing a consistency test. Then, the model was improved by introducing a requirement list and a description list, which makes testers and crowdsourcing tasks match better. The experimental results show that the proposed model can evaluate the ability of testers accurately, supports the selection and recommendation of crowdsourcing testers based on the evaluation results, and improves the efficiency and quality of mobile application crowdsourcing testing.
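The AHP weighting and consistency test mentioned above can be sketched as follows; the judgment-matrix values are illustrative, not the paper's.

```python
import numpy as np

# Saaty's random consistency index, indexed by the order of the judgment matrix
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(judgment):
    """Weight vector and consistency ratio for one level of the AHP hierarchy;
    judgment[i][j] is the pairwise importance of index i relative to index j."""
    J = np.asarray(judgment, dtype=float)
    n = J.shape[0]
    eigvals, eigvecs = np.linalg.eig(J)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                      # combination weight vector
    ci = (eigvals[k].real - n) / (n - 1)              # consistency index
    cr = ci / RI[n] if RI[n] else 0.0                 # consistency ratio (< 0.1 is acceptable)
    return w, cr

# illustrative judgment matrix over activity degree, test ability, integrity degree
J = [[1.0, 1/3, 1/2],
     [3.0, 1.0, 2.0],
     [2.0, 1/2, 1.0]]
weights, cr = ahp_weights(J)
print(weights.round(3), round(cr, 3))
```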
Fast intra-depth decision algorithm for high efficiency video coding
LIU Ying, GAO Xueming, LIM Kengpang
Journal of Computer Applications    2016, 36 (10): 2854-2858.   DOI: 10.11772/j.issn.1001-9081.2016.10.2854
To reduce the high computational complexity of intra coding in High Efficiency Video Coding (HEVC), a fast intra depth decision algorithm for Coding Units (CU) based on the spatial correlation of images was proposed. First, the depth of the current Coding Tree Unit (CTU) was estimated by linearly weighting the depths of the adjacent CTUs. Then, appropriate double thresholds were set to terminate the CTU splitting process early or to skip some CTU depths, thereby avoiding unnecessary depth calculation. Experimental results show that, compared with HM12.0, the proposed intra depth decision optimization algorithm significantly decreases the coding time of simple video sequences with only a negligible drop in quality: the Y-PSNR drops by 0.02 dB on average while the encoding time is reduced by 34.6% on average. Besides, the proposed algorithm is easy to combine with other methods to further reduce the computational complexity of HEVC intra coding, and ultimately serves the purpose of real-time transmission of high-definition video.
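A small sketch of the depth prediction and double-threshold decision; the neighbour weights and threshold values are illustrative, not the ones derived in the paper.

```python
def predict_ctu_depth(left_depth, up_depth, up_left_depth, up_right_depth,
                      weights=(0.35, 0.35, 0.15, 0.15)):
    """Linearly weighted depth prediction from the neighbouring CTUs."""
    neighbours = (left_depth, up_depth, up_left_depth, up_right_depth)
    return sum(w * d for w, d in zip(weights, neighbours))

def depth_decision(pred_depth, low_thr=0.5, high_thr=2.5):
    """Double-threshold rule: return the candidate depth range actually searched,
    skipping depths that the prediction makes unlikely."""
    if pred_depth < low_thr:
        return range(0, 2)        # likely homogeneous: terminate splitting early
    if pred_depth > high_thr:
        return range(2, 4)        # likely detailed: skip the shallow depths
    return range(0, 4)            # fall back to the full depth search

pred = predict_ctu_depth(1, 1, 0, 2)
print(pred, list(depth_decision(pred)))
```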
Improved wavelet denoising with dual-threshold and dual-factor function
REN Zhong, LIU Ying, LIU Guodong, HUANG Zhen
Journal of Computer Applications    2013, 33 (09): 2595-2598.   DOI: 10.11772/j.issn.1001-9081.2013.09.2595
Traditional wavelet threshold functions have drawbacks such as discontinuity at the threshold points and large deviation of the estimated wavelet coefficients, so that the Gibbs phenomenon and distortion are generated and the Signal-to-Noise Ratio (SNR) of the denoised signal can hardly be improved. To overcome these drawbacks, an improved wavelet threshold function was proposed. Compared with the soft, hard, semi-soft and other threshold functions, this function is not only continuous at the threshold points and more convenient to process, but also compatible with the performance of the traditional functions, and its practical flexibility is greatly improved by adjusting the dual threshold parameters and dual variable factors. To verify the improved function, a series of simulation experiments was performed, and the SNR and Root-Mean-Square Error (RMSE) values of different denoising methods were compared. The experimental results demonstrate that the smoothness is greatly enhanced and the distortion is reduced; compared with the soft function, the SNR increases by 22.2% and the RMSE decreases by 42.6%.
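Since the abstract does not give the exact function, the sketch below uses a stand-in dual-threshold shrinkage rule that is continuous at both thresholds, with two adjustable factors shaping the transition, applied to the detail bands of a PyWavelets decomposition.

```python
import numpy as np
import pywt

def dual_threshold(c, t1, t2, a=2.0, b=0.5):
    """Stand-in dual-threshold shrinkage: zero below t1, keep above t2, and a
    smooth transition in between that is continuous at both thresholds;
    a and b are the two adjustable factors controlling the transition shape."""
    c = np.asarray(c, dtype=float)
    mag = np.abs(c)
    ratio = np.clip((mag - t1) / (t2 - t1), 0.0, 1.0)
    trans = np.sign(c) * (b * t2 + (1 - b) * mag) * ratio ** a
    return np.where(mag <= t1, 0.0, np.where(mag >= t2, c, trans))

# denoise a noisy sine: threshold only the detail bands of a db4 decomposition
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
t1, t2 = sigma, 3.0 * sigma
denoised = pywt.waverec([coeffs[0]] + [dual_threshold(d, t1, t2) for d in coeffs[1:]], "db4")
print(round(float(np.mean((denoised[:t.size] - clean) ** 2)), 5))
```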
Method of improving passive positioning accuracy based on Beidou satellite
ZHANG Yu, LIU Ying, HE Qiurui
Journal of Computer Applications    2013, 33 (03): 611-613.   DOI: 10.3724/SP.J.1087.2013.00611
The Beidou satellite has been used as an external illuminator due to features such as continuous coverage of China, little motion relative to the earth, an inconspicuous Doppler shift, simple adjacent-channel interference and high security. Taking into account the location error caused by atmospheric refraction, a location error correction method was given to further improve the positioning accuracy. The simulation results show that the radar position error decreases as the elevation increases or the target altitude decreases, and that passive radar positioning accuracy is greatly improved by correcting for atmospheric refraction.
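A simplified, standard refraction model (not the paper's correction method) that reproduces the reported trend: the error grows at low elevation and high target altitude, so subtracting the modelled excess path improves the range estimate.

```python
import numpy as np

def refraction_range_error(elevation_deg, target_alt_m,
                           n_surface=313.0, scale_height_m=7000.0):
    """Excess path from an exponential refractivity profile, mapped to the slant
    direction with a 1/sin(E) factor; values of N_s and the scale height are
    typical textbook assumptions, not the paper's parameters."""
    zenith_excess = 1e-6 * n_surface * scale_height_m * (1 - np.exp(-target_alt_m / scale_height_m))
    return zenith_excess / np.sin(np.deg2rad(elevation_deg))   # metres

def corrected_range(measured_range_m, elevation_deg, target_alt_m):
    """Subtract the modelled refraction error from the measured range."""
    return measured_range_m - refraction_range_error(elevation_deg, target_alt_m)

# the modelled error shrinks as the elevation angle increases
for elev in (5, 15, 45):
    print(elev, round(float(refraction_range_error(elev, target_alt_m=10000.0)), 1))
```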
Application of prior-knowledge-bearing learning machine in biological sequence analysis
LIU Ying, LIN Yuan-lie, QIN Zheng
Journal of Computer Applications    2005, 25 (09): 2169-2170.   DOI: 10.3724/SP.J.1087.2005.02169
Biological sequence analysis is an important application domain of data mining technology. Its particularity lies in the fact that a great deal of prior knowledge can be utilized to improve the learning process. In the study of the protein modification N-acetylation, the performance of the SVM model was improved by properly using prior knowledge and upgrading the pattern extraction method.
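A hedged illustration of the general approach: sequence windows are one-hot encoded, per-position weights stand in for prior knowledge about the modification site, and an SVM is trained on the result. The encoding, weights and data below are invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(seq, position_weights=None):
    """One-hot encode a fixed-length residue window; the optional per-position
    weights are a hypothetical stand-in for prior knowledge about which positions
    matter for N-acetylation."""
    w = np.ones(len(seq)) if position_weights is None else np.asarray(position_weights)
    vec = np.zeros((len(seq), len(AMINO)))
    for i, aa in enumerate(seq):
        vec[i, AMINO.index(aa)] = w[i]
    return vec.ravel()

# toy training set of 7-residue windows centred on the candidate site
windows = ["MAAKKSL", "MSDEEVA", "GKRKLIV", "MTTAVSG"]
labels = [1, 0, 0, 1]
weights = [0.5, 0.8, 1.0, 2.0, 1.0, 0.8, 0.5]         # hypothetical prior emphasis
X = np.array([encode_window(s, weights) for s in windows])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict([encode_window("MAAQKSL", weights)]))
```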
Duplicated fault tolerance system based on active/standby fast-switching
WU Juan, MA Yong-qiang, LIU Ying
Journal of Computer Applications    2005, 25 (08): 1948-1951.   DOI: 10.3724/SP.J.1087.2005.01948
To shorten the time delay of active/standby switching in a fault tolerance system, an active/standby fast-switching system was designed. In this system, the active and the standby nodes use the same IP and MAC address so that both receive the network data simultaneously; however, the standby is prohibited from sending any network data, and both processes are transparent to the upper layers.
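A hedged sketch of the standby-side behaviour: outgoing traffic is suppressed until the node decides the active side has failed. The missed-heartbeat trigger used here is an assumption, since the abstract does not state the switching trigger.

```python
import time

class StandbyNode:
    """Standby side only: it receives traffic all along (same IP/MAC as the active
    node) but suppresses all outgoing packets until the active node is presumed
    to have failed."""
    def __init__(self, failover_timeout=0.3):
        self.failover_timeout = failover_timeout
        self.last_heartbeat = time.monotonic()
        self.active = False                           # False => outgoing traffic dropped

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def maybe_send(self, packet, send):
        if not self.active and time.monotonic() - self.last_heartbeat > self.failover_timeout:
            self.active = True                        # fast switch: start answering traffic
        if self.active:
            send(packet)                              # transparent to the upper layers

node = StandbyNode(failover_timeout=0.05)
node.maybe_send(b"reply-1", send=lambda p: print("sent", p))   # suppressed
time.sleep(0.06)                                               # active node goes silent
node.maybe_send(b"reply-2", send=lambda p: print("sent", p))   # now forwarded
```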
Access control model of 2-D spatial region based on spatial index
LIU Ying, ZHANG Shu-guang
Journal of Computer Applications    2005, 25 (06): 1277-1278.   DOI: 10.3724/SP.J.1087.2005.1277
Aiming at the characteristics of spatial data access control, the concept of spatial region access control based on a spatial index was proposed, and a 2-D access control model for spatial data was presented. The authorization and access request rules and constraints of spatial region access control were defined, and the method of spatial data access control was described in detail.
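A minimal sketch of region-based authorization checks under stated assumptions (rectangular grants, containment semantics, and a linear scan standing in for the spatial index).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned 2-D region."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, other):
        return (self.xmin <= other.xmin and self.ymin <= other.ymin and
                self.xmax >= other.xmax and self.ymax >= other.ymax)

class SpatialACL:
    """Each subject is granted a set of rectangular regions; a request is allowed
    only if the requested region is fully contained in one grant. A real system
    would answer the containment query through a spatial index such as an R-tree."""
    def __init__(self):
        self.grants = {}                              # subject -> list of authorized Rects

    def authorize(self, subject, region):
        self.grants.setdefault(subject, []).append(region)

    def check(self, subject, requested):
        return any(g.contains(requested) for g in self.grants.get(subject, []))

acl = SpatialACL()
acl.authorize("alice", Rect(0, 0, 100, 100))
print(acl.check("alice", Rect(10, 10, 20, 20)))       # True: inside the grant
print(acl.check("alice", Rect(90, 90, 120, 120)))     # False: extends outside the grant
```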